28 research outputs found

    Prediction in Photovoltaic Power by Neural Networks

    The ability to forecast the power produced by renewable energy plants in the short and medium term is a key requirement for enabling high-level penetration of distributed generation into the grid infrastructure. Forecasting energy production is mandatory for dispatching and distribution issues, at the transmission system operator level as well as at the electrical distributor and power system operator levels. In this paper, we present three techniques based on neural and fuzzy neural networks, namely the radial basis function network, the adaptive neuro-fuzzy inference system, and the higher-order neuro-fuzzy inference system, which are well suited to predicting data sequences stemming from real-world applications. The preliminary results concerning the prediction of the power generated by a large-scale photovoltaic plant in Italy confirm the reliability and accuracy of the proposed approaches.
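The abstract gives no implementation details, so the following is only a rough sketch of the first of the three models: a radial basis function network fitted by regularized least squares for one-step-ahead prediction. The series (a logistic map), the centers, the `gamma` width, and all function names are illustrative stand-ins, not taken from the paper.

```python
import math

def rbf_features(x, centers, gamma):
    """Gaussian radial basis activations for a scalar input x."""
    return [math.exp(-gamma * (x - c) ** 2) for c in centers]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][-1] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Synthetic one-dimensional series (chaotic logistic map as a stand-in for plant output).
series = [0.3]
for _ in range(60):
    series.append(3.8 * series[-1] * (1.0 - series[-1]))

X, Y = series[:-1], series[1:]          # learn y(t+1) as a function of y(t)
centers = [0.1, 0.3, 0.5, 0.7, 0.9]
gamma = 20.0
k = len(centers)

# Regularized least squares for the output weights: (H^T H + lam*I) w = H^T y
H = [rbf_features(x, centers, gamma) for x in X]
lam = 1e-8
HtH = [[sum(h[a] * h[b] for h in H) + (lam if a == b else 0.0)
        for b in range(k)] for a in range(k)]
Hty = [sum(h[a] * y for h, y in zip(H, Y)) for a in range(k)]
w = solve(HtH, Hty)

pred = sum(wi * hi for wi, hi in zip(w, rbf_features(series[-1], centers, gamma)))
true_next = 3.8 * series[-1] * (1.0 - series[-1])
```

In the paper's setting, `series` would be the sampled power output of the plant, and the input would typically include several past samples and weather covariates rather than a single lag.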

    Distributed Learning for Multiple Source Data

    Distributed learning is the problem of inferring a function when the data to be analyzed are distributed across a network of agents. Different application domains may impose very different constraints on the solution, including low computational power at each location, limited underlying connectivity (e.g., no broadcasting capability), or transferability constraints related to enormous bandwidth requirements. It is thus no longer possible to send the data to a central node where traditional learning algorithms can be run; new techniques able to model and exploit the information in big data locally are necessary. Motivated by these observations, this thesis proposes new techniques that efficiently avoid a fully centralized implementation, without requiring the presence of a coordinating node and using only in-network communication. The focus is on both supervised and unsupervised distributed learning procedures that, so far, have been addressed only in very specific settings. For instance, some of them are not actually distributed because they just split the computation between different subsystems; others call for the presence of a fusion center collecting data from all the agents at each iteration; some others are implementable only on specific network topologies such as fully connected graphs. In the first part of this thesis, these limits are overcome by using spectral clustering, ensemble clustering, or density-based approaches to realize a purely distributed architecture where there is no hierarchy and all agents are peers. Each agent learns only from its own dataset, while the information about the others is unknown and obtained in a decentralized way through a process of communication and collaboration among the agents. Experimental results, and theoretical convergence properties, prove the effectiveness of these proposals.
    In the second part of the thesis, the proposed contributions are tested in several real-world distributed applications. Telemedicine and e-health applications are found to be among the most prolific areas to this end. Moreover, the mapping of learning algorithms onto low-power hardware resources is also an interesting area of application in the context of distributed wireless networks. Finally, a study on the generation and control of renewable energy sources is presented. Overall, the algorithms presented throughout the thesis cover a wide range of possible practical applications, and trace the path to many future extensions, either as scientific research or as technology transfer results.

    A comparison of machine learning classifiers for smartphone-based gait analysis

    This paper proposes a reliable monitoring scheme that can assist medical specialists in watching over the patient's condition. Although several technologies are traditionally used to acquire motion data of patients, their high costs as well as the large spaces they require make them difficult to apply in a home context for rehabilitation. A reliable patient monitoring technique, which can automatically record and classify patient movements, is mandatory for a telemedicine protocol. In this paper, a comparison of several state-of-the-art machine learning classifiers is proposed, where stride data are collected and processed by using a smartphone. The main goal is to identify a robust methodology able to ensure a suitable classification of gait movements, in order to allow the monitoring of patients over time as well as to discriminate between pathological and physiological gait. Additionally, the advantages of smartphones of being compact, cost-effective, and relatively easy to operate make these devices particularly suited for home-based rehabilitation programs.
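As an illustration of the kind of classifier such a comparison includes, here is a minimal k-nearest-neighbor sketch on synthetic stride features; the feature names, value ranges, and class labels are invented for illustration and are not the paper's data.

```python
import math
import random

def knn_predict(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest training samples."""
    ranked = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

random.seed(0)
# Hypothetical stride features: (stride length [m], cadence [steps/min]).
physio = [((random.gauss(1.4, 0.08), random.gauss(110, 5)), "physiological")
          for _ in range(30)]
patho = [((random.gauss(0.9, 0.08), random.gauss(85, 5)), "pathological")
         for _ in range(30)]
train = physio + patho
```

In practice the features should be rescaled to comparable ranges before computing distances, otherwise the feature with the largest numeric range dominates the vote.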

    A smartphone-based application using machine learning for gesture recognition. Using feature extraction and template matching via Hu image moments to recognize gestures

    The rapid development of smart devices, such as smartphones and tablets, leads to new challenges and ushers in a new stage of human-computer interaction. In this context, it becomes essential to develop methods and techniques for a better and more natural interaction with these devices. In this article, we address the problem of gesture segmentation and recognition, taking into account the limited computational resources of smartphone devices. We introduce a methodology for designing efficient and useful applications that, by using low-cost and widely diffused technologies, can be employed in telemedicine, home-based rehabilitation, and other biomedical applications for patients with specific disabilities. To this end, we have designed a new machine-learning algorithm that identifies hand gestures through the use of Hu image moments, chosen for their invariance to rotation, translation, and scaling, and for their low computational cost. The experimental results collected from a case study show excellent gesture recognition performance and an affordable real-time execution speed on smartphones and other mobile devices.
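A minimal, self-contained sketch of the core idea, assuming nothing beyond the abstract: Hu invariant moments computed from the central moments of a binary image, and template matching as a distance between signatures. Only the first two of the seven Hu moments are shown, and the shapes and helper names (`hu_signature`, `embed`, `match`) are illustrative.

```python
def hu_signature(img):
    """First two Hu invariant moments of a binary image given as a list of 0/1 rows."""
    pix = [(x, y, v) for y, row in enumerate(img) for x, v in enumerate(row)]
    m = lambda p, q: sum(x ** p * y ** q * v for x, y, v in pix)       # raw moments
    m00 = m(0, 0)
    xc, yc = m(1, 0) / m00, m(0, 1) / m00                              # centroid
    mu = lambda p, q: sum((x - xc) ** p * (y - yc) ** q * v for x, y, v in pix)
    eta = lambda p, q: mu(p, q) / m00 ** (1 + (p + q) / 2)             # scale-normalized
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return (h1, h2)

def embed(shape, rows, cols, dy, dx):
    """Paste a small binary shape into a larger empty frame at offset (dy, dx)."""
    img = [[0] * cols for _ in range(rows)]
    for y, row in enumerate(shape):
        for x, v in enumerate(row):
            img[y + dy][x + dx] = v
    return img

def match(sig_a, sig_b):
    """Template matching score: Euclidean distance between Hu signatures."""
    return sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)) ** 0.5

# An L-shaped "gesture" template, and the same shape translated inside the frame.
L_shape = [[1, 1, 0],
           [1, 0, 0],
           [1, 0, 0]]
sig_ref = hu_signature(embed(L_shape, 8, 8, 0, 0))
sig_moved = hu_signature(embed(L_shape, 8, 8, 4, 3))
sig_square = hu_signature(embed([[1, 1], [1, 1]], 8, 8, 2, 2))
```

Because the signature is built from central, normalized moments, translating the shape inside the frame leaves it unchanged, which is exactly the invariance the article exploits; in practice a library such as OpenCV provides all seven Hu moments directly.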

    Selection of clinical features for pattern recognition applied to gait analysis

    This paper deals with the opportunity of extracting useful information from medical data retrieved directly from a stereophotogrammetric system applied to gait analysis. A feature selection method that exhaustively evaluates all the possible combinations of the gait parameters is presented, in order to find the best subset able to discriminate between diseased and healthy subjects. This procedure is used to estimate the performance of widely used classification algorithms, whose effectiveness has been ascertained in many real-world problems with respect to well-known classification benchmarks, both in terms of number of selected features and classification accuracy. Precisely, support vector machine, naive Bayes, and K-nearest neighbor classifiers obtain the lowest classification error, with an accuracy greater than 97%. For the considered classification problem, the whole set of features proves to be redundant and can be significantly pruned. Namely, groups of only 3 or 5 features are able to preserve high accuracy when the aim is to check the anomaly of a gait. The step length and the swing speed are the most informative features for gait analysis, but cadence and stride may also add useful information for movement evaluation.
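The exhaustive evaluation described above can be sketched as follows, assuming a simple leave-one-out 1-NN score as the wrapper criterion (the paper's actual classifiers and gait parameters differ; the data below are synthetic and the names illustrative).

```python
import itertools
import random

def loo_accuracy(X, y, feats):
    """Leave-one-out accuracy of a 1-NN classifier restricted to the chosen features."""
    hits = 0
    for i in range(len(X)):
        nearest = min((j for j in range(len(X)) if j != i),
                      key=lambda j: sum((X[i][f] - X[j][f]) ** 2 for f in feats))
        hits += (y[nearest] == y[i])
    return hits / len(X)

def best_subset(X, y, max_size):
    """Exhaustively score every feature combination up to max_size, keep the best."""
    return max((loo_accuracy(X, y, fs), fs)
               for k in range(1, max_size + 1)
               for fs in itertools.combinations(range(len(X[0])), k))

random.seed(1)
# Synthetic gait records: feature 0 ("step length") separates the classes,
# features 1-3 are uninformative noise.
X = ([[random.gauss(0.70, 0.02)] + [random.gauss(0, 1) for _ in range(3)] for _ in range(10)] +
     [[random.gauss(0.50, 0.02)] + [random.gauss(0, 1) for _ in range(3)] for _ in range(10)])
y = ["healthy"] * 10 + ["diseased"] * 10

acc, feats = best_subset(X, y, max_size=2)
```

Exhaustive search over all subsets is exponential in the number of features; it is affordable here only because gait analysis involves a small set of candidate parameters.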

    Finite precision implementation of random vector functional-link networks

    The increasing amount of data to be processed, coming from multiple sources as in the case of sensor networks, and the need to cope with security and privacy constraints, make it necessary to use computationally efficient techniques on simple and cheap hardware architectures, often distributed in pervasive scenarios. The Random Vector Functional-Link is a neural network model usually adopted for processing distributed big data, but no constraints have been considered so far to deal with limited hardware resources. This paper focuses on implementing a modified version of the Random Vector Functional-Link network with finite-precision arithmetic, in order to make it suited even to hardware architectures based on a simple microcontroller. A genetic optimization is also proposed to ensure that the overall performance is comparable with standard software implementations. The numerical results prove the efficacy of the proposed approach.
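As a hedged sketch of what finite-precision arithmetic does to such a network: the bit widths, the network sizes, and the absence of the paper's genetic optimization are all simplifications of mine, not the paper's design.

```python
import math
import random

def to_fixed(v, frac_bits=8, total_bits=16):
    """Quantize to signed fixed point: round to frac_bits fractional bits, then saturate."""
    scale = 1 << frac_bits
    q = round(v * scale)
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return max(lo, min(hi, q)) / scale

def rvfl_output(x, W, b, beta, f=lambda v: v):
    """RVFL forward pass: fixed random hidden layer (tanh) plus trained linear readout.
    f is applied to every stored value and intermediate sum (identity = full precision)."""
    h = [math.tanh(f(sum(f(wij) * xi for wij, xi in zip(wi, x)) + f(bi)))
         for wi, bi in zip(W, b)]
    return f(sum(f(bj) * hj for bj, hj in zip(beta, h)))

random.seed(2)
n_in, n_hidden = 3, 8
W = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
b = [random.uniform(-1, 1) for _ in range(n_hidden)]
beta = [random.uniform(-1, 1) for _ in range(n_hidden)]

x = [0.3, -0.7, 0.5]
full = rvfl_output(x, W, b, beta)                   # floating-point reference
fixed = rvfl_output(x, W, b, beta, f=to_fixed)      # finite-precision version
```

The gap between `full` and `fixed` is the degradation that the paper's genetic optimization is designed to keep comparable with the software implementation.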

    Distributed on-line learning for random-weight fuzzy neural networks

    The Random-Weight Fuzzy Neural Network is an inference system where the fuzzy rule parameters of the antecedents (i.e., the membership functions) are randomly generated and those of the consequents are estimated using a Regularized Least Squares algorithm. In this regard, we propose an on-line learning algorithm under the hypothesis of training data distributed across a network of interconnected agents. In particular, we assume that each agent in the network receives a stream of data as a sequence of mini-batches. When receiving a new chunk of data, each agent updates its estimate of the consequent parameters and, periodically, all agents agree on a common model through the Distributed Average Consensus protocol. The learning algorithm is faster than a solution based on a centralized training set, and it does not rely on any coordination authority. The experimental results on well-known datasets validate our proposal.
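A minimal sketch of the Distributed Average Consensus step the abstract refers to, on an illustrative ring of five agents; in the paper each value would be a vector of consequent parameters rather than a scalar, and the topology and step size here are assumptions of mine.

```python
def consensus_step(values, neighbors, eps=0.2):
    """One synchronous consensus step: each agent moves toward its neighbors' values."""
    return [v + eps * sum(values[j] - v for j in neighbors[i])
            for i, v in enumerate(values)]

# Five agents on a ring, each holding its local estimate of a consequent parameter.
neighbors = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
values = [1.0, 3.0, 5.0, 7.0, 9.0]
average = sum(values) / len(values)   # the protocol preserves this network-wide average

for _ in range(200):
    values = consensus_step(values, neighbors)
```

A sufficient condition for convergence on an undirected graph is a step size below the reciprocal of the maximum node degree; since the update is symmetric, the network-wide average is preserved at every step, so all agents converge to it.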

    A nonuniform quantizer for hardware implementation of neural networks

    New trends in neural computation, now dealing with distributed learning on pervasive sensor networks and multiple sources of big data, make it necessary to use computationally efficient techniques implemented on simple and cheap hardware architectures. In this paper, a nonuniform quantization at the input layer of neural networks is introduced, in order to optimize their implementation on hardware architectures based on finite-precision arithmetic. Namely, we propose a nonlinear A/D conversion of input signals that takes into account the actual structure of the data to be processed. The Random Vector Functional-Link is considered as the reference neural network model, and a genetic optimization is adopted for determining the quantization levels. The proposed approach is assessed by several experimental results obtained on well-known benchmarks for the general problem of data regression.
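The paper selects the levels with a genetic optimization; as a simpler stand-in, this sketch places nonuniform levels with a 1-D Lloyd (k-means) iteration and compares them against uniformly spaced levels on skewed synthetic data. All names and parameters are illustrative.

```python
import random
import statistics

def lloyd_levels(samples, n_levels, iters=50):
    """Place quantization levels with a 1-D Lloyd iteration (k-means on the samples)."""
    levels = sorted(random.sample(samples, n_levels))
    for _ in range(iters):
        buckets = [[] for _ in levels]
        for s in samples:
            i = min(range(len(levels)), key=lambda k: abs(s - levels[k]))
            buckets[i].append(s)
        # Move each level to the mean of the samples it captures.
        levels = sorted(statistics.fmean(b) if b else l for b, l in zip(buckets, levels))
    return levels

def quantize(x, levels):
    """Map x to the nearest quantization level."""
    return min(levels, key=lambda l: abs(x - l))

def mse(samples, levels):
    return statistics.fmean((s - quantize(s, levels)) ** 2 for s in samples)

random.seed(3)
# Skewed input data: most values are small, a few are large (e.g. sensor readings).
data = [random.expovariate(1.0) for _ in range(500)]

nonuniform = lloyd_levels(data, 8)
step = (max(data) - min(data)) / 7
uniform = [min(data) + k * step for k in range(8)]
```

On data like this, levels that follow the input distribution spend resolution where the samples actually are, which is the motivation behind the paper's data-driven A/D conversion.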

    A new learning approach for Takagi-Sugeno fuzzy systems applied to time series prediction

    In this paper, we present a study on the use of fuzzy neural networks and their application to the prediction of time series generated by complex real-world processes. The new learning strategy is suited to any fuzzy inference model, especially in the case of higher-order Sugeno-type fuzzy rules. The data considered herein are real-world cases concerning chaotic benchmarks as well as environmental time series. The comparison with well-known neural and fuzzy neural models proves that our approach is able to follow the behavior of the underlying, unknown process, with a good prediction of the observed time series.
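The abstract does not spell the model out; for orientation, a first-order Takagi-Sugeno inference step looks roughly like this, with two made-up rules (Gaussian antecedents, linear consequents) that are not taken from the paper.

```python
import math

def ts_predict(x, rules):
    """First-order Takagi-Sugeno inference: Gaussian memberships weight linear consequents."""
    weights, outputs = [], []
    for (c, sigma), (a, b) in rules:
        w = math.exp(-((x - c) / sigma) ** 2)   # rule firing strength (antecedent)
        weights.append(w)
        outputs.append(a * x + b)               # linear consequent
    return sum(w * o for w, o in zip(weights, outputs)) / sum(weights)

# Two hypothetical rules: "x is LOW -> y = 2x" and "x is HIGH -> y = -x + 6".
rules = [((0.0, 1.0), (2.0, 0.0)),
         ((4.0, 1.0), (-1.0, 6.0))]
```

Near each antecedent center, the output is dominated by that rule's local linear model; between centers, the prediction blends the two, which is what lets such models track a nonlinear process piecewise.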